Inferring missing links or detecting spurious ones based on observed graphs, known as link prediction, is a long-standing challenge in graph data analysis. With the recent advances in deep learning, graph neural networks have been used for link prediction and have achieved state-of-the-art performance. Nevertheless, existing methods developed for this purpose are typically discriminative, computing features of local subgraphs around two neighboring nodes and predicting potential links between them from the perspective of subgraph classification. In this formalism, the selection of enclosing subgraphs and heuristic structural features for subgraph classification significantly affects the performance of the methods. To overcome this limitation, this paper proposes a novel and radically different link prediction algorithm based on the network reconstruction theory, called GraphLP. Instead of sampling positive and negative links and heuristically computing the features of their enclosing subgraphs, GraphLP utilizes the feature learning ability of deep-learning models to automatically extract the structural patterns of graphs for link prediction under the assumption that real-world graphs are not locally isolated. Moreover, GraphLP explores high-order connectivity patterns to utilize the hierarchical organizational structures of graphs for link prediction. Our experimental results on all common benchmark datasets from different applications demonstrate that the proposed method consistently outperforms other state-of-the-art methods. Unlike the discriminative neural network models used for link prediction, GraphLP is generative, which provides a new paradigm for neural-network-based link prediction.
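To make the reconstruction-based view concrete, the following is a minimal sketch of link prediction by adjacency reconstruction with a simple graph autoencoder and inner-product decoder. It is an illustrative stand-in under generic assumptions, not GraphLP's actual architecture.

```python
# A minimal sketch of reconstruction-based link prediction: learn node embeddings,
# reconstruct the full adjacency matrix, and rank unobserved pairs by their
# reconstructed scores. Illustrative only; not GraphLP's model.
import torch
import torch.nn as nn

def normalize_adj(adj):
    """Symmetrically normalize an adjacency matrix with self-loops."""
    a = adj + torch.eye(adj.size(0))
    d_inv_sqrt = torch.diag(a.sum(dim=1).pow(-0.5))
    return d_inv_sqrt @ a @ d_inv_sqrt

class GraphAutoencoder(nn.Module):
    def __init__(self, n_nodes, hidden_dim=16):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n_nodes, hidden_dim) * 0.1)

    def forward(self, a_norm):
        z = a_norm @ self.weight           # one linear propagation step
        return torch.sigmoid(z @ z.t())    # reconstructed adjacency

# Toy usage: reconstruct the whole adjacency, then inspect scores of zero entries.
adj = torch.tensor([[0, 1, 1, 0],
                    [1, 0, 1, 0],
                    [1, 1, 0, 1],
                    [0, 0, 1, 0]], dtype=torch.float32)
model = GraphAutoencoder(n_nodes=4)
opt = torch.optim.Adam(model.parameters(), lr=0.01)
a_norm = normalize_adj(adj)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.binary_cross_entropy(model(a_norm), adj)
    loss.backward()
    opt.step()
print(model(a_norm).detach())  # high scores on zero entries suggest candidate missing links
```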
Gradient-based explanation is the cornerstone of explainable deep networks, but it has been shown to be vulnerable to adversarial attacks. However, existing works measure explanation robustness with the $\ell_p$-norm, which can be counter-intuitive to humans, who only pay attention to the top few salient features. We propose explanation ranking thickness as a more suitable explanation robustness metric. We then present a new practical adversarial attack goal that manipulates explanation rankings. To mitigate ranking-based attacks while maintaining computational feasibility, we derive surrogate bounds of the thickness, which would otherwise involve expensive sampling and integration. We use a multi-objective approach to analyze the convergence of a gradient-based attack and to confirm that explanation robustness can be measured by the thickness metric. Experiments on various network architectures and diverse datasets demonstrate the superiority of the proposed methods and show that the widely adopted Hessian-based curvature-smoothing approaches are not as robust as our method.
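As a simple proxy for the ranking view of robustness, the sketch below compares the top-k salient features of an explanation before and after a perturbation. The paper's thickness metric is defined over perturbation neighborhoods; this only illustrates the top-k comparison idea.

```python
# An illustrative proxy for ranking-based explanation robustness: the overlap of
# the top-k salient features before and after an input perturbation.
import numpy as np

def topk_overlap(saliency_a, saliency_b, k=10):
    """Fraction of the top-k features of saliency_a retained in saliency_b."""
    top_a = set(np.argsort(-saliency_a)[:k])
    top_b = set(np.argsort(-saliency_b)[:k])
    return len(top_a & top_b) / k

rng = np.random.default_rng(0)
clean = rng.random(100)                                   # saliency on the clean input
perturbed = clean + 0.05 * rng.standard_normal(100)       # saliency after a small perturbation
print(topk_overlap(clean, perturbed, k=10))               # 1.0 means the ranking is fully preserved
```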
The security of artificial intelligence (AI) is an important research area towards safe, reliable, and trustworthy AI systems. To accelerate the research on AI security, the Artificial Intelligence Security Competition (AISC) was organized by the Zhongguancun Laboratory, China Industrial Control Systems Cyber Emergency Response Team, Institute for Artificial Intelligence, Tsinghua University, and RealAI as part of the Zhongguancun International Frontier Technology Innovation Competition (https://www.zgc-aisc.com/en). The competition consists of three tracks, including Deepfake Security Competition, Autonomous Driving Security Competition, and Face Recognition Security Competition. This report will introduce the competition rules of these three tracks and the solutions of top-ranking teams in each track.
Pessimism is of great importance in offline reinforcement learning (RL). One broad category of offline RL algorithms achieves pessimism through explicit or implicit behavior regularization. However, most of them consider only policy divergence as behavior regularization, ignoring how the offline state distribution differs from that of the learning policy, which may lead to under-pessimism for some states and over-pessimism for others. To address this problem, we propose a principled algorithmic framework for offline RL, called \emph{State-Aware Proximal Pessimism} (SA-PP). The key idea of SA-PP is to leverage discounted stationary state distribution ratios between the learning policy and the offline dataset to modulate the degree of behavior regularization in a state-wise manner, so that pessimism can be implemented in a more appropriate way. We first provide theoretical justification for the superiority of SA-PP over previous algorithms, demonstrating that SA-PP yields a lower suboptimality upper bound in a broad range of settings. Furthermore, we propose a new algorithm named \emph{State-Aware Conservative Q-Learning} (SA-CQL) by building SA-PP upon the representative CQL algorithm, using DualDICE to estimate the discounted stationary state distribution ratios. Extensive experiments on standard offline RL benchmarks show that SA-CQL outperforms popular baselines on a large portion of the benchmarks and attains the highest average return.
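The sketch below shows how a per-state distribution-ratio weight could modulate a CQL-style conservative penalty, as the abstract describes. Names and tensor shapes are illustrative assumptions; the exact SA-CQL loss and the DualDICE ratio estimator are defined in the paper.

```python
# A minimal sketch of state-wise modulation of a conservative (CQL-style) penalty
# by a stationary-distribution ratio w(s) ~ d_pi(s) / d_D(s). Illustrative only.
import torch

def state_aware_conservative_penalty(q_all_actions, q_data_action, state_ratio):
    """
    q_all_actions : (batch, n_actions) Q-values over all actions at each state
    q_data_action : (batch,)           Q-value of the dataset action
    state_ratio   : (batch,)           estimated d_pi(s) / d_D(s), e.g. from DualDICE
    """
    # Standard CQL gap: push down logsumexp of Q, push up Q on dataset actions.
    gap = torch.logsumexp(q_all_actions, dim=1) - q_data_action
    # State-aware pessimism: weight the gap per state by the distribution ratio.
    return (state_ratio * gap).mean()

# Toy usage with random tensors standing in for a critic's outputs.
q_all = torch.randn(32, 4)
q_data = torch.randn(32)
ratio = torch.rand(32) + 0.5
print(state_aware_conservative_penalty(q_all, q_data, ratio))
```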
Image super-resolution is a common task on mobile and IoT devices, where one often needs to upscale and enhance low-resolution images and video frames. While numerous solutions have been proposed for this problem in the past, they are usually not compatible with low-power mobile NPUs, which have many computational and memory constraints. In this Mobile AI challenge, we address this problem and ask the participants to design an efficient quantized image super-resolution solution that can demonstrate real-time performance on mobile NPUs. The participants were provided with the DIV2K dataset and trained INT8 models to perform high-quality 3X image upscaling. The runtime of all models was evaluated on the Synaptics VS680 Smart Home board with a dedicated edge NPU capable of accelerating quantized neural networks. All proposed solutions are fully compatible with the above NPU, demonstrating up to 60 FPS when reconstructing Full HD resolution images. A detailed description of all models developed in the challenge is provided in this paper.
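For context on what "INT8 models" involves in practice, here is a minimal sketch of full-integer export with TensorFlow Lite. The saved-model path and the representative data generator are placeholders, not artifacts of the challenge itself.

```python
# A minimal sketch of full-integer (INT8) export with TensorFlow Lite.
# "sr_model" and the calibration generator are placeholders; real calibration
# would use low-resolution DIV2K crops.
import numpy as np
import tensorflow as tf

def representative_data_gen():
    # Yield a few low-resolution patches for calibration.
    for _ in range(100):
        yield [np.random.rand(1, 64, 64, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("sr_model")  # placeholder path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

tflite_model = converter.convert()
with open("sr_model_int8.tflite", "wb") as f:
    f.write(tflite_model)
```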
The spread of COVID-19 has revealed that transmission risk patterns are not homogeneous across cities and communities, and that various heterogeneous features influence spread trajectories. Hence, for predictive pandemic monitoring, it is crucial to explore the latent heterogeneous features of cities and communities that distinguish their specific pandemic spread trajectories. To this end, this study creates a network embedding model that captures cross-county visitation networks as well as heterogeneous features to uncover clusters of U.S. counties based on their pandemic spread trajectories. We collected location-intelligence features for 2,787 counties from March 3 to June 29, 2020 (the initial wave). Second, we constructed a human visitation network that incorporates county features as node attributes and visits between counties as network edges. Our attributed network embedding approach integrates both the topological characteristics of the cross-county visitation network and the heterogeneous county features. We performed clustering analysis on the attributed network embeddings to reveal four prototypes of distinct risk trajectories corresponding to four county clusters. Subsequently, we identified four features as important features underlying the distinctive transmission risk patterns among the prototypes. The attributed network embedding approach and the findings identify and explain the non-homogeneous pandemic risk trajectories across counties for predictive pandemic monitoring. This study also shows that data-driven and deep-learning-based approaches for pandemic analytics can complement standard epidemiological models for pandemic policy analysis.
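As a simplified stand-in for the attributed network embedding and clustering pipeline, the sketch below embeds a visitation network spectrally, concatenates standardized county attributes, and clusters the counties into four groups. The paper's embedding model is more elaborate; this only illustrates the attributes-plus-topology idea.

```python
# A simplified stand-in: spectral embedding of the visitation network plus
# standardized node attributes, clustered with KMeans. Illustrative only.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.manifold import SpectralEmbedding
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_counties, n_features = 50, 8
visits = rng.poisson(2.0, size=(n_counties, n_counties))   # visit counts between counties
affinity = (visits + visits.T) / 2.0                        # symmetrized visitation weights
attributes = rng.random((n_counties, n_features))           # heterogeneous county features

graph_emb = SpectralEmbedding(n_components=8, affinity="precomputed").fit_transform(affinity)
node_repr = np.hstack([graph_emb, StandardScaler().fit_transform(attributes)])

clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(node_repr)
print(np.bincount(clusters))   # sizes of the four county clusters
```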
As a core part of autonomous driving systems, motion planning has received extensive attention from both academia and industry. However, due to nonholonomic dynamics, and particularly in the presence of unstructured environments and dynamic obstacles, there is no efficient trajectory planning solution capable of spatial-temporal joint optimization. To bridge this gap, we propose a versatile and real-time trajectory optimization method that can generate high-quality feasible trajectories using a full vehicle model under arbitrary constraints. By leveraging the differential flatness property of car-like robots, we use the flat outputs to analytically formulate all feasibility constraints, which simplifies the trajectory planning problem. Moreover, obstacle avoidance is achieved with full-dimensional polygons, producing less conservative trajectories with safety guarantees, especially in tightly constrained spaces. We present comprehensive benchmarks against state-of-the-art methods, demonstrating the significance of the proposed method in terms of efficiency and trajectory quality. Real-world experiments validate the practicality of our algorithm. We will release our code as an open-source package for the reference of the research community.
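As a concrete illustration of the flatness idea, in the standard kinematic bicycle model with rear-axle position $(x, y)$ as the flat output, the remaining states and inputs are recovered from derivatives of the flat output alone (the paper's full vehicle model is richer; this is only the textbook case):

\[
\theta = \arctan\frac{\dot{y}}{\dot{x}}, \qquad
v = \sqrt{\dot{x}^2 + \dot{y}^2}, \qquad
\delta = \arctan\!\left(\frac{L\,(\dot{x}\ddot{y} - \dot{y}\ddot{x})}{(\dot{x}^2 + \dot{y}^2)^{3/2}}\right),
\]

where $L$ is the wheelbase. Constraints on speed, curvature, and steering can therefore be written directly on $(x, y)$ and its derivatives.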
Since it is difficult to collect paired real-world training data, image deraining is currently dominated by supervised learning on synthetic data generated by Photoshop rendering. However, the generalization to real rainy scenes is usually limited due to the gap between synthetic and real-world data. In this paper, we first statistically explore why supervised deraining models cannot generalize well to real rainy scenes and find substantial differences between synthetic and real rain data. Inspired by our study, we propose to remove rain by learning favorable deraining representations from other connected tasks, for which labels on real data can be easily obtained. Our core idea is thus to learn representations from real data through task transfer to improve generalization; we therefore call this learning strategy \textit{task transfer learning}. When there are multiple connected tasks, we propose to reduce the model size via knowledge distillation: the pretrained models of the connected tasks are treated as teachers, and all their knowledge is distilled into a student network, so that the model size is reduced while the effective prior representations from all connected tasks are preserved. Finally, the student network is fine-tuned on a small amount of paired synthetic rain data to guide the pretrained prior representations toward rain removal. Extensive experiments show that the proposed task transfer learning strategy is surprisingly successful: it compares favorably with state-of-the-art supervised learning methods and clearly surpasses other semi-supervised deraining methods trained on synthetic data. In particular, it shows superior generalization to real-world scenes.
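The following is an illustrative sketch of the multi-teacher distillation step: a student network matches the intermediate features of several frozen teachers pretrained on connected tasks, using unlabeled real rainy images. Architectures and loss weights are placeholder assumptions, not the paper's exact configuration.

```python
# An illustrative sketch of multi-teacher feature distillation on unlabeled real
# rainy images. Architectures and losses are placeholders.
import torch
import torch.nn as nn

class SmallEncoder(nn.Module):
    def __init__(self, out_channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, out_channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(out_channels, out_channels, 3, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.body(x)

teachers = [SmallEncoder().eval() for _ in range(2)]   # stand-ins for pretrained connected-task models
for t in teachers:
    for p in t.parameters():
        p.requires_grad_(False)

student = SmallEncoder()
opt = torch.optim.Adam(student.parameters(), lr=1e-4)

rainy = torch.rand(4, 3, 64, 64)                        # a batch of unlabeled real rainy images
student_feat = student(rainy)
distill_loss = sum(nn.functional.mse_loss(student_feat, t(rainy)) for t in teachers)

opt.zero_grad()
distill_loss.backward()
opt.step()
```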
To automatically correct handwritten assignments, the conventional approach is to use an OCR model to recognize characters and compare them with the answers. OCR models are easily confused when recognizing handwritten Chinese characters, and the text information of the answers is missing during model inference. However, teachers always review and correct assignments with these answers in mind. In this paper, we focus on Chinese cloze test correction and propose a multimodal approach (named AIM). The encoded representations of the answers interact with the visual information of the students' handwriting. Instead of predicting "right" or "wrong", we perform sequence labeling on the answer text to infer, in a fine-grained manner, which answer characters differ from the handwritten content. We take samples from OCR datasets as positive samples for this task and develop a negative-sample augmentation method to expand the training data. Experimental results show that AIM outperforms OCR-based methods, and extensive studies demonstrate the effectiveness of our multimodal approach.
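To illustrate the sequence-labeling formulation, the sketch below tags each answer character as matching or differing from the handwriting, conditioned on a pooled visual feature of the handwritten region. The fusion scheme and dimensions are placeholder assumptions, not AIM's actual architecture.

```python
# An illustrative sketch of fine-grained answer checking as sequence labeling
# over answer characters, conditioned on a visual feature. Illustrative only.
import torch
import torch.nn as nn

class AnswerTagger(nn.Module):
    def __init__(self, vocab_size=5000, dim=128, n_labels=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.visual_proj = nn.Linear(256, dim)          # project a pooled handwriting feature
        self.encoder = nn.GRU(dim, dim, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * dim, n_labels)  # per-character "same" / "different"

    def forward(self, answer_ids, visual_feat):
        tokens = self.embed(answer_ids)                              # (B, T, dim)
        fused = tokens + self.visual_proj(visual_feat).unsqueeze(1)  # broadcast visual context
        encoded, _ = self.encoder(fused)
        return self.classifier(encoded)                              # (B, T, n_labels)

model = AnswerTagger()
answer_ids = torch.randint(0, 5000, (2, 8))    # answer characters
visual_feat = torch.rand(2, 256)               # pooled feature of the handwritten answer
print(model(answer_ids, visual_feat).shape)    # torch.Size([2, 8, 2])
```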
Trojan attacks pose a serious threat to AI systems. Recent work on Transformer models has gained explosive popularity, and self-attention is indisputably at their core. This raises a central question: can we reveal Trojans through the attention mechanisms of BERT and ViT? In this paper, we investigate the attention hijacking pattern in Trojan AIs, where trigger tokens "kidnap" the attention weights when a specific trigger is present. We observe consistent attention hijacking patterns in Trojan Transformers from both the natural language processing (NLP) and computer vision (CV) domains. This intriguing property helps us understand the Trojan mechanism in BERT and ViT. We also propose an Attention-Hijacking Trojan Detector (AHTD) to discriminate Trojan AIs from clean ones.
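As a simple illustration of the hijacking pattern, the sketch below scores how much attention mass all tokens devote to a candidate trigger position, averaged over heads. The paper's detector builds on richer statistics; shapes and thresholds here are only illustrative.

```python
# An illustrative "attention hijacking" score: the mean attention received by a
# candidate trigger token across heads and query positions.
import torch

def hijack_score(attn, trigger_pos):
    """
    attn        : (n_heads, seq_len, seq_len) softmaxed attention weights
    trigger_pos : index of the suspected trigger token
    """
    return attn[:, :, trigger_pos].mean().item()

n_heads, seq_len = 12, 16
attn = torch.softmax(torch.randn(n_heads, seq_len, seq_len), dim=-1)
print(hijack_score(attn, trigger_pos=3))   # near 1/seq_len when clean, much larger when hijacked
```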